Dimensionality Reduction via Multiple Locality-Constrained Graph Optimization
Authors
Abstract
Similar Resources
Classification Constrained Dimensionality Reduction
Dimensionality reduction is a topic of recent interest. In this paper, we present the classification constrained dimensionality reduction (CCDR) algorithm to account for label information. The algorithm can handle multiple classes as well as the semi-supervised setting. We present out-of-sample expressions for both labeled and unlabeled data. For unlabeled data, we introduce a method of...
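For context, the kind of label-constrained graph embedding that CCDR points to can be sketched in a few lines of Python. This is not the CCDR algorithm from the paper; it is an assumed, minimal variant that strengthens kNN-graph edges between points sharing a class label (with -1 marking unlabeled points) and then solves a Laplacian-eigenmaps-style generalized eigenproblem. The function name label_constrained_embedding and the beta weight are hypothetical.

import numpy as np
from scipy.linalg import eigh
from sklearn.neighbors import kneighbors_graph

def label_constrained_embedding(X, y, n_components=2, n_neighbors=10, beta=1.0):
    """Embed X with a kNN graph whose edges are reinforced when two labeled
    points share a class; y == -1 marks unlabeled points (semi-supervised)."""
    W = kneighbors_graph(X, n_neighbors, mode="connectivity").toarray()
    W = np.maximum(W, W.T)                          # symmetrize the kNN graph
    labeled = y != -1
    same = (y[:, None] == y[None, :]) & labeled[:, None] & labeled[None, :]
    W = W + beta * same.astype(float)               # label-constraint edges
    np.fill_diagonal(W, 0.0)
    D = np.diag(W.sum(axis=1))
    L = D - W                                       # graph Laplacian
    # smallest non-trivial solutions of L v = lambda D v give the embedding
    _, vecs = eigh(L, D)
    return vecs[:, 1:n_components + 1]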
Graph optimization for dimensionality reduction with sparsity constraints
Graph-based dimensionality reduction (DR) methods play an increasingly important role in many machine learning and pattern recognition applications. In this paper, we propose a novel graph-based learning scheme to conduct Graph Optimization for Dimensionality Reduction with Sparsity Constraints (GODRSC). Different from most graph-based DR methods, where graphs are generally constructed in adv...
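As an illustration of the sparsity-constrained graph idea, the sketch below builds an l1-based reconstruction graph and fits a linear projection from it in a single pass. GODRSC itself optimizes the graph and the projection jointly; this simplified, assumed variant only shows the flavor of coupling sparse coding with a Laplacian-style projection step, and the function name sparse_graph_projection and the alpha/regularization values are hypothetical.

import numpy as np
from scipy.linalg import eigh
from sklearn.linear_model import Lasso

def sparse_graph_projection(X, n_components=2, alpha=0.05):
    """Code each sample over the others with an l1 penalty, use the codes as a
    graph, and fit an LPP-style linear projection from that graph."""
    n, d = X.shape
    W = np.zeros((n, n))
    for i in range(n):
        others = np.delete(np.arange(n), i)
        coder = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
        coder.fit(X[others].T, X[i])                # columns = other samples
        W[i, others] = np.abs(coder.coef_)          # sparse affinities
    W = (W + W.T) / 2.0
    D = np.diag(W.sum(axis=1))
    L = D - W
    # projection from X^T L X v = lambda X^T D X v (ridge-stabilized)
    A = X.T @ L @ X + 1e-6 * np.eye(d)
    B = X.T @ D @ X + 1e-6 * np.eye(d)
    _, vecs = eigh(A, B)
    P = vecs[:, :n_components]
    return X @ P, P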
Dimensionality reduction via discretization
Numeric data and large numbers of records in a database make extracting explicit concepts from the raw data a challenging task. This paper introduces a method that reduces data vertically and horizontally, keeps the discriminating power of the original data, and paves the way for extracting concepts. The method is based on discretization (vertical reduction) and feature sele...
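The vertical/horizontal reduction described here can be pictured with a toy pandas sketch: numeric columns are binned (vertical reduction) and identical discretized records are then collapsed with a support count (horizontal reduction). The function and column names are made up for illustration and do not come from the paper.

import pandas as pd

def discretize_and_merge(df, numeric_cols, n_bins=4):
    reduced = df.copy()
    for col in numeric_cols:
        # equal-frequency binning; duplicates="drop" tolerates repeated edges
        reduced[col] = pd.qcut(reduced[col], q=n_bins, duplicates="drop")
    # collapse identical discretized records, keeping a support count
    return (reduced.groupby(list(reduced.columns), observed=True)
                   .size()
                   .reset_index(name="count"))

df = pd.DataFrame({"age": [23, 25, 37, 41, 58, 60],
                   "income": [30, 32, 55, 60, 80, 82],
                   "label": ["n", "n", "y", "y", "y", "y"]})
print(discretize_and_merge(df, ["age", "income"], n_bins=2))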
Dimensionality Reduction on Grassmannian via Riemannian Optimization: A Generalized Perspective
This paper proposes a generalized framework with joint normalization which learns lower-dimensional subspaces with maximum discriminative power by making use of Riemannian geometry. In particular, we model the similarity/dissimilarity between subspaces using various metrics defined on the Grassmannian and formulate dimensionality reduction as a non-linear constrained optimization problem conside...
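One ingredient the abstract alludes to, a metric between subspaces on the Grassmannian, is easy to show concretely. The sketch below computes the widely used projection metric from the principal angles between two subspaces; it is background for the kind of metric the paper mentions, not the paper's optimization framework, and the function name is hypothetical.

import numpy as np

def projection_metric(A, B):
    """Distance between the subspaces spanned by the columns of A and B (both
    assumed to have orthonormal columns, i.e. points on the Grassmannian)."""
    # singular values of A^T B are the cosines of the principal angles
    cosines = np.linalg.svd(A.T @ B, compute_uv=False)
    cosines = np.clip(cosines, -1.0, 1.0)
    return np.sqrt(np.sum(1.0 - cosines**2))

# Example: two random 2-D subspaces of R^4
rng = np.random.default_rng(0)
A, _ = np.linalg.qr(rng.standard_normal((4, 2)))
B, _ = np.linalg.qr(rng.standard_normal((4, 2)))
print(projection_metric(A, B))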
Dimensionality Reduction for Stationary Time Series via Stochastic Nonconvex Optimization
Stochastic optimization naturally arises in machine learning. Efficient algorithms with provable guarantees, however, are still largely missing when the objective function is nonconvex and the data points are dependent. This paper studies this fundamental challenge through a streaming PCA problem for stationary time series data. Specifically, our goal is to estimate the principal component of ti...
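Streaming PCA has a classical baseline, Oja's rule, which updates a single direction one sample at a time. The sketch below shows that baseline on independent samples only; the paper's contribution concerns the harder nonconvex, dependent-data (time-series) setting, which this toy example does not reproduce. The function name and step size are illustrative assumptions.

import numpy as np

def oja_streaming_pca(stream, dim, lr=0.01):
    """Track the leading principal direction from a stream of vectors."""
    rng = np.random.default_rng(0)
    w = rng.standard_normal(dim)
    w /= np.linalg.norm(w)
    for x in stream:
        w += lr * x * (x @ w)            # Oja's rule: stochastic update
        w /= np.linalg.norm(w)           # renormalize to stay on the sphere
    return w

# Example: Gaussian samples with one dominant direction
rng = np.random.default_rng(1)
u = np.array([3.0, 1.0]) / np.sqrt(10)
data = (rng.standard_normal((5000, 1)) * 3.0) @ u[None, :] \
       + 0.3 * rng.standard_normal((5000, 2))
print(oja_streaming_pca(data, dim=2))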
Journal
Journal title: IEEE Access
Year: 2018
ISSN: 2169-3536
DOI: 10.1109/access.2018.2871884